
Eliminate OpenAI -> Gemini Model Mapping #38

Closed
wants to merge 4 commits into from

Conversation

@ekatiyar (Contributor) commented Aug 2, 2024

Addresses #35

This change eliminates the mapping of OpenAI models to Gemini models and exposes the underlying Gemini models directly through the API endpoints. It was motivated by my own issues with using Google's OpenAI API Compatible Endpoint.

My own use case is flexible enough to use non-OpenAI model names with the OpenAI API, so the mapping from OpenAI model names to Gemini model names only serves to confuse things.

I recognize this may not be the case for all users of this project, and some clients may only be able to handle OpenAI-named models; in that case, feel free to reject this PR.

@ekatiyar (Contributor, Author) commented Aug 2, 2024

If this is something you are interested in, I can create a separate branch without the README changes.

Gemini-OpenAI-Proxy is a proxy designed to convert the OpenAI API protocol to the Google Gemini Pro protocol. This enables seamless integration of OpenAI-powered functionalities into applications using the Gemini Pro protocol.
Gemini-OpenAI-Proxy is a proxy designed to convert the OpenAI API protocol to the Google Gemini protocol. This enables applications built for the OpenAI API to seamlessly communicate with the Gemini protocol, including support for Chat Completion, Embeddings, and Model(s) endpoints.

This is a fork of zhu327/gemini-openai-proxy that eliminates the mapping of OpenAI models to Gemini models and exposes the underlying Gemini models directly through the API endpoints. I've also added support for Google's embeddings model. This was motivated by my own issues with using Google's [OpenAI API Compatible Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-gemini-using-openai-library).
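The behavioral difference can be sketched as follows. This is an illustrative sketch only: the model names and the mapping table here are examples, not the proxy's actual translation table.

```python
# Illustrative sketch of the change in this fork (hypothetical mapping;
# the real table in zhu327/gemini-openai-proxy may differ).

# Upstream behavior: OpenAI model names are translated to Gemini models.
UPSTREAM_MODEL_MAP = {
    "gpt-3.5-turbo": "gemini-1.0-pro",   # example entry, not authoritative
    "gpt-4": "gemini-1.5-pro",           # example entry, not authoritative
}

def resolve_model_upstream(requested: str) -> str:
    """Translate an OpenAI model name into a Gemini model name."""
    return UPSTREAM_MODEL_MAP.get(requested, "gemini-1.0-pro")

def resolve_model_fork(requested: str) -> str:
    """This fork: the Gemini model name the client sent passes through unchanged."""
    return requested
```

With the fork, a client that can send arbitrary model names simply requests e.g. `gemini-1.5-pro` directly, and no translation layer sits in between.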
Owner commented:

Remove the steps about forking from the README.

You can either do this on the command line:

```bash
docker run --restart=always -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest
```
Owner commented:

GitHub Actions will automatically build Docker images, so we can just use the existing image here.

@ekatiyar (Contributor, Author) commented Aug 7, 2024

This is just the main branch of my fork, which is somewhat messy since I've added other things since opening this PR. I can reorganize it so it's easier to merge into your repo, but to start I'd want to merge the embeddings support I added in #39, and then create a separate branch for eliminating the model mapping.

@ekatiyar (Contributor, Author) commented Aug 8, 2024

Closing this PR in favor of merging #41, which accomplishes the same thing in a cleaner, non-disruptive way.

@ekatiyar closed this Aug 8, 2024